I found out about the course from Kimmo. I am eager to learn, but I know there is a lot for me to learn, as this is something I have no previous experience in. I expect to learn to use R at least at a level where I understand where to search for help and know how to communicate with statisticians about the results of my studies. If I have enough time, I might even learn to really analyze data by myself as well.
The book R for Health Data Science is written in a really nice way. At first I had problems understanding what I should do with Exercise set 1, but then I got help from the assistant teachers during the computer clinic session. I really liked the data visualization parts, and I started to understand the basic functions. I had no previous experience with the Markdown language, so I did not understand what to do with this file either. I asked my partner for help and got the advice to use https://markdownlivepreview.com/, and now I understand how this works. But when I read the instructions further, I noticed the guidance was available there as well. I was just too fast :) I had some problems saving my work from RStudio to GitHub; I hope I will learn this next week.
This is a link to my repository.
# This is a so-called "R chunk" where you can write R code.
date()
## [1] "Mon Nov 28 20:27:17 2022"
The text continues here.
date()
## [1] "Mon Nov 28 20:27:17 2022"
I have done the data wrangling exercise. I had problems, but I got help; with the existing instructions alone I would not have managed it, but I got something done. When doing the analysis exercise I noticed there was something wrong with my data, the way I had saved it, or the way I tried to read it into RStudio, so I used the data from the link given in the instructions. I compared that data with my own data and it looks the same, but something is going on.
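One programmatic way to compare the two datasets would be the following sketch; the local file name learning2014.csv is just an assumption about where my wrangled data ended up.
# hypothetical local file name; adjust to wherever the wrangled data was saved
local_data <- read.csv("learning2014.csv")
remote_data <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/learning2014.txt", sep = ",", header = TRUE)
# all.equal() reports the first differences (e.g. mismatched column types)
# instead of just TRUE/FALSE, which helps find data that only "looks the same"
all.equal(local_data, remote_data)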
This is data from [ASSIST 2014 International survey of Approaches to Learning by Vehkalahti](https://www.mv.helsinki.fi/home/kvehkala/JYTmooc/JYTOPKYS2-meta.txt).
Here I have set the working directory and read the csv file (I noticed later that it did not work), and then read the pre-existing table instead:
library(tidyverse)
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.2 ──
## ✔ ggplot2 3.3.6 ✔ purrr 0.3.5
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.4.1
## ✔ readr 2.1.3 ✔ forcats 0.5.2
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
setwd("C:/Users/riikk/Documents/Open data science/IODS-project/Data")
# students2014 <- read_csv("learning2014.csv")  # this did not work, so I commented it out
students2014 <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/learning2014.txt",
sep = ",", header = TRUE)
dim(students2014)
## [1] 166 7
str(students2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
Above I have also explored the dimensions and structure of the data: 166 rows and 7 columns, i.e. 166 observations of 7 variables. The data contains seven variables: gender, age, attitude, deep, stra, surf and points.
Next I have summarized the data both in text and visually. For example, the age of the participants varied between 17 and 55 (mean 25.51), and the exam points varied between 7 and 33 (mean 22.72).
summary(students2014)
## gender age attitude deep
## Length:166 Min. :17.00 Min. :1.400 Min. :1.583
## Class :character 1st Qu.:21.00 1st Qu.:2.600 1st Qu.:3.333
## Mode :character Median :22.00 Median :3.200 Median :3.667
## Mean :25.51 Mean :3.143 Mean :3.680
## 3rd Qu.:27.00 3rd Qu.:3.700 3rd Qu.:4.083
## Max. :55.00 Max. :5.000 Max. :4.917
## stra surf points
## Min. :1.250 Min. :1.583 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:2.417 1st Qu.:19.00
## Median :3.188 Median :2.833 Median :23.00
## Mean :3.121 Mean :2.787 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :5.000 Max. :4.333 Max. :33.00
library(GGally)
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
library(ggplot2)
p <- ggpairs(students2014, mapping = aes(col = gender), lower = list(combo = wrap("facethist", bins = 20)))
# draw the plot
p
[Figure: graphical overview of all variable pairs with ggpairs, coloured by gender]
When exploring the data visually, it looks like there were fewer male participants, the participating women were a bit younger, and women seem to have lower attitude scores. There seem to be statistically significant correlations between attitude and points, between surf and attitude in males, and between surf and deep in males. The distribution of attitude differs between male and female participants. There seem to be a lot of outliers in age, and also some in deep and attitude (males).
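To back these visual impressions with numbers, here is a small sketch of a grouped summary (dplyr is already loaded via the tidyverse):
# group-wise counts and means to check the visual impressions numerically
students2014 %>%
  group_by(gender) %>%
  summarise(n = n(),
            mean_age = mean(age),
            mean_attitude = mean(attitude),
            mean_points = mean(points))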
my_model <- lm(points ~ attitude + stra + surf, data = students2014)
par(mar=c(1,1,1,1))
summary(my_model)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = students2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
I chose three variables (attitude, stra, surf) and fitted a regression model with exam points as the target (dependent, outcome) variable. This is multivariable linear regression: we explore the relationship between the explanatory variables (attitude, stra and surf) and the exam points.
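Written out from the coefficient estimates above, the fitted model is approximately: points = 11.02 + 3.40 × attitude + 0.85 × stra − 0.59 × surf.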
Only attitude had a statistically significant relationship with points, so I removed the other variables and ran the model without them, making this a univariable linear regression.
my_model <- lm(points ~ attitude, data = students2014)
summary(my_model)
##
## Call:
## lm(formula = points ~ attitude, data = students2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.6372 1.8303 6.358 1.95e-09 ***
## attitude 3.5255 0.5674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
“R-squared is another measure of how close the data are to the fitted line. 0.0 indicates that none of the variability in the dependent variable is explained by the explanatory variable (no relationship between data points and fitted line) and 1.0 indicates that the model explains all of the variability in the dependent variable.” In this case the multiple R-squared is 0.1906, meaning that only a small part of the variability in points is explained by attitude.
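As a side note, the R-squared values can also be extracted from the model object directly:
# pull the (adjusted) R-squared out of the lm summary object
summary(my_model)$r.squared
summary(my_model)$adj.r.squared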
Lastly, I have produced diagnostic plots: Residuals vs Fitted values, Normal Q-Q plot and Residuals vs Leverage. Quantile-quantile is a graphical method for comparing the distribution of our own data to a theoretical distribution, such as the normal distribution. A Q-Q plot simply plots the quantiles of our data against the theoretical quantiles of a particular distribution (the default shown below is the normal distribution). If our data follow that distribution (e.g., normal), the points fall on the theoretical straight line.
plot(my_model, which = c(1, 2, 5))
[Figures: Residuals vs Fitted, Normal Q-Q, and Residuals vs Leverage diagnostic plots]
The first one looks quite nice to me, as the spread of the observations around the fitted line is about the same on the left as on the right. In the second plot the residuals diverge from the straight line at the right end, so the residuals are not totally normally distributed. As I don’t have much experience with this, I don’t know whether that is a problem or not, as most of the points fit the line nicely.
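As a rough complement to eyeballing the Q-Q plot, one could also run a Shapiro-Wilk normality test on the residuals (just a sketch; with n = 166 the test is quite sensitive to small deviations):
# formal normality test of the residuals; a small p-value suggests non-normality
shapiro.test(resid(my_model))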
I don’t know why this prints the last graphs twice.
And then I ran out of time :D
date()
## [1] "Mon Nov 28 20:27:19 2022"
alc <- read.table("https://raw.githubusercontent.com/KimmoVehkalahti/Helsinki-Open-Data-Science/master/datasets/alc.csv", sep = ",", header = TRUE)
colnames(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "guardian" "traveltime" "studytime" "schoolsup"
## [16] "famsup" "activities" "nursery" "higher" "internet"
## [21] "romantic" "famrel" "freetime" "goout" "Dalc"
## [26] "Walc" "health" "failures" "paid" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
The dataset explores student achievement in secondary education in two Portuguese schools. The data were collected using school reports and questionnaires. Performance in mathematics and Portuguese language has been measured, and in addition the data include attributes such as demographic, social and school-related features. In the dataset, the variable ‘alc_use’ is the average of the ‘Dalc’ and ‘Walc’ variables, and ‘high_use’ is TRUE if ‘alc_use’ is higher than 2 and FALSE otherwise.
More detailed description of the dataset can be read at: https://archive.ics.uci.edu/ml/datasets/Student+Performance
Some of the answers are binary (e.g. yes/no), others numeric (e.g. on a scale of 1-5) or nominal (e.g. mother/father/other).
I chose four variables: studytime, activities, health and goout. My initial hypothesis was that studytime and goout would be associated with high alcohol use. Below I explore the data and compare the findings to this hypothesis.
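As a first look, here is a quick numerical overview of the four chosen variables (a small base-R sketch):
# distributions of the chosen variables before cross tabulating
summary(alc[, c("studytime", "activities", "health", "goout")])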
library(tidyverse)
library(dplyr)
library(ggplot2)
library(readr)
library(gmodels)
## Cross tabulations
CrossTable(alc$high_use, alc$studytime)
##
##
## Cell Contents
## |-------------------------|
## | N |
## | Chi-square contribution |
## | N / Row Total |
## | N / Col Total |
## | N / Table Total |
## |-------------------------|
##
##
## Total Observations in Table: 370
##
##
## | alc$studytime
## alc$high_use | 1 | 2 | 3 | 4 | Row Total |
## -------------|-----------|-----------|-----------|-----------|-----------|
## FALSE | 56 | 128 | 52 | 23 | 259 |
## | 2.314 | 0.017 | 2.381 | 0.889 | |
## | 0.216 | 0.494 | 0.201 | 0.089 | 0.700 |
## | 0.571 | 0.692 | 0.867 | 0.852 | |
## | 0.151 | 0.346 | 0.141 | 0.062 | |
## -------------|-----------|-----------|-----------|-----------|-----------|
## TRUE | 42 | 57 | 8 | 4 | 111 |
## | 5.400 | 0.041 | 5.556 | 2.075 | |
## | 0.378 | 0.514 | 0.072 | 0.036 | 0.300 |
## | 0.429 | 0.308 | 0.133 | 0.148 | |
## | 0.114 | 0.154 | 0.022 | 0.011 | |
## -------------|-----------|-----------|-----------|-----------|-----------|
## Column Total | 98 | 185 | 60 | 27 | 370 |
## | 0.265 | 0.500 | 0.162 | 0.073 | |
## -------------|-----------|-----------|-----------|-----------|-----------|
##
##
CrossTable(alc$high_use, alc$activities)
##
##
## Cell Contents
## |-------------------------|
## | N |
## | Chi-square contribution |
## | N / Row Total |
## | N / Col Total |
## | N / Table Total |
## |-------------------------|
##
##
## Total Observations in Table: 370
##
##
## | alc$activities
## alc$high_use | no | yes | Row Total |
## -------------|-----------|-----------|-----------|
## FALSE | 120 | 139 | 259 |
## | 0.224 | 0.210 | |
## | 0.463 | 0.537 | 0.700 |
## | 0.670 | 0.728 | |
## | 0.324 | 0.376 | |
## -------------|-----------|-----------|-----------|
## TRUE | 59 | 52 | 111 |
## | 0.523 | 0.490 | |
## | 0.532 | 0.468 | 0.300 |
## | 0.330 | 0.272 | |
## | 0.159 | 0.141 | |
## -------------|-----------|-----------|-----------|
## Column Total | 179 | 191 | 370 |
## | 0.484 | 0.516 | |
## -------------|-----------|-----------|-----------|
##
##
CrossTable(alc$high_use, alc$health)
##
##
## Cell Contents
## |-------------------------|
## | N |
## | Chi-square contribution |
## | N / Row Total |
## | N / Col Total |
## | N / Table Total |
## |-------------------------|
##
##
## Total Observations in Table: 370
##
##
## | alc$health
## alc$high_use | 1 | 2 | 3 | 4 | 5 | Row Total |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## FALSE | 35 | 28 | 61 | 45 | 90 | 259 |
## | 0.243 | 0.067 | 0.446 | 0.059 | 0.653 | |
## | 0.135 | 0.108 | 0.236 | 0.174 | 0.347 | 0.700 |
## | 0.761 | 0.667 | 0.762 | 0.726 | 0.643 | |
## | 0.095 | 0.076 | 0.165 | 0.122 | 0.243 | |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## TRUE | 11 | 14 | 19 | 17 | 50 | 111 |
## | 0.568 | 0.156 | 1.042 | 0.138 | 1.524 | |
## | 0.099 | 0.126 | 0.171 | 0.153 | 0.450 | 0.300 |
## | 0.239 | 0.333 | 0.237 | 0.274 | 0.357 | |
## | 0.030 | 0.038 | 0.051 | 0.046 | 0.135 | |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## Column Total | 46 | 42 | 80 | 62 | 140 | 370 |
## | 0.124 | 0.114 | 0.216 | 0.168 | 0.378 | |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
##
##
CrossTable(alc$high_use, alc$goout)
##
##
## Cell Contents
## |-------------------------|
## | N |
## | Chi-square contribution |
## | N / Row Total |
## | N / Col Total |
## | N / Table Total |
## |-------------------------|
##
##
## Total Observations in Table: 370
##
##
## | alc$goout
## alc$high_use | 1 | 2 | 3 | 4 | 5 | Row Total |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## FALSE | 19 | 82 | 97 | 40 | 21 | 259 |
## | 0.842 | 2.928 | 2.012 | 3.904 | 6.987 | |
## | 0.073 | 0.317 | 0.375 | 0.154 | 0.081 | 0.700 |
## | 0.864 | 0.845 | 0.808 | 0.513 | 0.396 | |
## | 0.051 | 0.222 | 0.262 | 0.108 | 0.057 | |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## TRUE | 3 | 15 | 23 | 38 | 32 | 111 |
## | 1.964 | 6.832 | 4.694 | 9.109 | 16.303 | |
## | 0.027 | 0.135 | 0.207 | 0.342 | 0.288 | 0.300 |
## | 0.136 | 0.155 | 0.192 | 0.487 | 0.604 | |
## | 0.008 | 0.041 | 0.062 | 0.103 | 0.086 | |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
## Column Total | 22 | 97 | 120 | 78 | 53 | 370 |
## | 0.059 | 0.262 | 0.324 | 0.211 | 0.143 | |
## -------------|-----------|-----------|-----------|-----------|-----------|-----------|
##
##
studytime_barplot <- ggplot(data = alc, aes(x=studytime, fill = high_use))
studytime_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")
activities_barplot <- ggplot(data = alc, aes(x=activities, fill = high_use))
activities_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")
health_barplot <- ggplot(data = alc, aes(x=health, fill = high_use))
health_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")
goout_barplot <- ggplot(data = alc, aes(x=goout, fill = high_use))
goout_barplot + geom_bar(position=position_dodge()) + labs(y="Number of students")
## Box plots
studytime_boxplot <- ggplot(alc, aes(x = high_use, y = studytime))
# define the plot as a boxplot and draw it
studytime_boxplot + geom_boxplot()
activities_boxplot <- ggplot(alc, aes(x = high_use, y = activities))
activities_boxplot + geom_boxplot()
A boxplot makes no sense in this case, as activities is a binary variable.
health_boxplot <- ggplot(alc, aes(x = high_use, y = health))
health_boxplot + geom_boxplot()
goout_boxplot <- ggplot(alc, aes(x = high_use, y = goout))
goout_boxplot + geom_boxplot()
There seems to be an association between high_use and goout, as well as between high_use and studytime, supporting my initial hypotheses. There does not seem to be an association between health and alcohol use, or between activities and alcohol use.
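These visual impressions could also be checked with chi-squared tests of independence (a sketch using base R):
# chi-squared tests: small p-values would support an association with high_use
chisq.test(table(alc$high_use, alc$goout))
chisq.test(table(alc$high_use, alc$studytime))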
Fitting the logistic regression model:
m <- glm(high_use ~ studytime + activities + health + goout, data = alc, family = "binomial")
summary(m)
##
## Call:
## glm(formula = high_use ~ studytime + activities + health + goout,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.7878 -0.7844 -0.5507 0.9040 2.6331
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -2.54832 0.63687 -4.001 6.3e-05 ***
## studytime -0.58926 0.16676 -3.534 0.00041 ***
## activitiesyes -0.30072 0.25281 -1.190 0.23423
## health 0.14004 0.09029 1.551 0.12090
## goout 0.76189 0.11894 6.406 1.5e-10 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 383.67 on 365 degrees of freedom
## AIC: 393.67
##
## Number of Fisher Scoring iterations: 4
The variables I thought were associated with high_use (studytime and goout) based on visual observation seem to be statistically significant in this summary as well. Studytime has a negative association with high_use, meaning that students who study more are less likely to be high users of alcohol. High alcohol use and going out with friends are positively associated, meaning that students who go out more with friends are more likely to be high users.
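To make the direction of the associations concrete, the fitted model can predict the probability of high use for a made-up student profile (the profile values below are purely illustrative):
# hypothetical student: little studying (1), no activities, average health (3), goes out a lot (5)
predict(m, newdata = data.frame(studytime = 1, activities = "no", health = 3, goout = 5),
        type = "response")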
### Presenting and interpreting the coefficients of the model as odds ratios
# compute odds ratios (OR)
OR <- coef(m) %>% exp
# compute confidence intervals (CI)
CI <- confint(m) %>% exp
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 0.07821325 0.02178097 0.2659023
## studytime 0.55473498 0.39563591 0.7621319
## activitiesyes 0.74028531 0.44966067 1.2138630
## health 1.15032068 0.96559347 1.3768670
## goout 2.14231738 1.70763719 2.7248784
For activities and health the confidence interval crosses 1, which means there is no statistically significant association. For goout, a one-point increase in going out with friends is associated with 2.14 times higher odds of high alcohol use. So none of the variables has a very high odds ratio.
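As a sanity check, the unadjusted odds ratio for activities can be computed by hand from the cross tabulation shown earlier; it should be close to the adjusted value of 0.74 from the model:
# odds of high use among those with activities (52/139) vs without (59/120)
(52 / 139) / (59 / 120)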
# fit the model
m <- glm(high_use ~ studytime + goout, data = alc, family = "binomial")
# predict() the probability of high_use
probabilities <- predict(m, type = "response")
library(dplyr)
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use
alc <- mutate(alc, prediction = probability > 0.5)
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction) %>% addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 238 21 259
## TRUE 70 41 111
## Sum 308 62 370
This means that my model of two statistically significant variables is not great at predicting high alcohol use: among all 370 predictions, 91 were inaccurate (21 false positives and 70 false negatives). Other predictors would need to be added to create a more accurate prediction.
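The overall accuracy can be computed directly from the predictions:
# share of correct predictions: (238 + 41) / 370
mean(alc$high_use == alc$prediction)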
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.2459459
Computing the average number of wrong predictions for a random guess (uniform distribution from 0.0 to 1.0):
alc <- mutate(alc, random_guess = runif(n()))
loss_func(class = alc$high_use, prob = alc$random_guess)
## [1] 0.4891892
# leave-one-out cross-validation (K = number of observations)
library(boot)
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = nrow(alc))
# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2459459
This tells us that about 25% of the predictions are wrong with this model, whereas the random guess was wrong about 50% of the time.
# 10-fold cross-validation
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m, K = 10)
cv$delta[1]
## [1] 0.2486486
There was no big difference between the leave-one-out and 10-fold cross-validation results. My model is slightly better than the model introduced in the exercise set.

***
date()
## [1] "Mon Nov 28 20:27:25 2022"
library(tidyverse)
#install.packages(c("MASS", "corrplot"))
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
data("Boston")
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
This is a dataset already included in R that is often used for teaching purposes. It has 506 observations of 14 variables, i.e. 506 rows and 14 columns, and it explores housing values in the suburbs of Boston. The variables include, for example, per capita crime rate by town, weighted mean of distances to five Boston employment centres, and pupil-teacher ratio by town. More about the dataset can be read here: https://stat.ethz.ch/R-manual/R-devel/library/MASS/html/Boston.html
library(corrplot)
## corrplot 0.92 loaded
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
pairs(Boston)
corr_boston <- cor(Boston)
corrplot(corr_boston, method="circle", type = "upper", cl.pos = "r", tl.pos = "d", tl.cex = 0.7)
Relationships between the variables: the highest positive correlation is between rad and tax (better access to radial highways is correlated with a higher property tax rate, which makes sense!). High negative correlations are found between age and dis, lstat and medv, and dis and nox (the strongest negative correlation: the farther you are from the employment centres, the less nitrogen oxide there is, which also makes sense!), as well as between indus and dis.
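The strongest correlations can also be listed programmatically instead of reading them off the plot (a small sketch using the corr_boston matrix computed above):
# melt the correlation matrix, drop the diagonal and duplicate pairs,
# and show the pairs with the largest absolute correlations
corr_long <- as.data.frame(as.table(corr_boston))
corr_long <- subset(corr_long, as.character(Var1) < as.character(Var2))
head(corr_long[order(-abs(corr_long$Freq)), ], 5)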
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
class(boston_scaled)
## [1] "matrix" "array"
boston_scaled <- as.data.frame(boston_scaled)
The standard score, or z-score, is the number of standard deviations by which a raw value is above or below the mean of what is being observed or measured. The scale() function does this, which makes the variables more similar and easier to compare.
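A manual check for one variable shows what scale() does under the hood:
# z-score by hand: subtract the mean and divide by the standard deviation
z_crim <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
all.equal(z_crim, boston_scaled$crim)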
boston_scaled$crim <- as.numeric(boston_scaled$crim)
summary(boston_scaled$crim)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -0.419367 -0.410563 -0.390280 0.000000 0.007389 9.924110
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
table(crime)
## crime
## low med_low med_high high
## 127 126 126 127
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
n <- nrow(boston_scaled)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
correct_classes <- test$crime
test <- dplyr::select(test, -crime)
correct_classes
## [1] low med_high med_high med_low med_low med_low low low
## [9] low med_low med_low med_low low med_low med_low med_low
## [17] med_low med_low med_low med_high med_high med_high med_high med_high
## [25] med_high med_high med_high med_low med_low low low med_low
## [33] low low med_low med_high med_low med_low low med_high
## [41] med_high med_low med_low med_low med_low med_low med_high low
## [49] low low low med_low low med_low low med_high
## [57] med_high med_high med_low med_high med_high low low low
## [65] low low low low low low med_low high
## [73] high high high high high high high high
## [81] high high high high high high high high
## [89] high high high high high high high high
## [97] med_high med_low med_high med_low low low
## Levels: low med_low med_high high
lda.fit <- lda(crime ~ ., data = train)
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2475248 0.2400990 0.2599010 0.2524752
##
## Group means:
## zn indus chas nox rm age
## low 0.96908175 -0.8972394 -0.11484506 -0.9149826 0.4997965 -0.8764085
## med_low -0.09537852 -0.2758530 0.01179157 -0.5358050 -0.1185925 -0.3354278
## med_high -0.37208445 0.1550500 0.17762524 0.4043688 0.1573490 0.4529441
## high -0.48724019 1.0171096 -0.04073494 1.0796632 -0.4441616 0.8012932
## dis rad tax ptratio black lstat
## low 0.8864090 -0.6924575 -0.7678258 -0.49171380 0.3767164 -0.7885751
## med_low 0.3400790 -0.5426122 -0.4681892 -0.05517512 0.3084939 -0.1265906
## med_high -0.3979917 -0.4098252 -0.3045762 -0.29267657 0.1168925 0.0128277
## high -0.8596795 1.6382099 1.5141140 0.78087177 -0.5749256 0.8883415
## medv
## low 0.59988379
## med_low 0.02008452
## med_high 0.19742792
## high -0.69101701
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.1096713784 0.70676960 -1.0080876
## indus -0.0063257266 -0.04346413 0.2138675
## chas -0.0750447195 -0.08131247 0.1458687
## nox 0.3979655057 -0.81146906 -1.1964771
## rm -0.1277005072 -0.10826135 -0.2069114
## age 0.2204131273 -0.32068736 -0.2574227
## dis -0.1132729982 -0.15795563 0.1731797
## rad 3.1730266374 1.02315042 -0.1820877
## tax 0.0002695476 -0.10727809 0.6412228
## ptratio 0.1017383304 0.02559729 -0.2719166
## black -0.1198263913 0.04840651 0.1434611
## lstat 0.1870876958 -0.19463441 0.4026789
## medv 0.1607367924 -0.30599188 -0.1300195
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9463 0.0416 0.0121
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1, 2)) {
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]], col = color, length = arrow_heads)
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
classes <- as.numeric(train$crime)
# use the numeric classes for colours and plotting symbols in the biplot
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
I had already saved the correct crime categories (correct_classes) before removing them from the test set.
lda.pred <- predict(lda.fit, newdata = test)
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 16 10 1 0
## med_low 5 22 2 0
## med_high 1 7 12 1
## high 0 0 0 25
The model mostly predicts correctly: 75 of the 102 test observations fall on the diagonal of the table.
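The same can be computed as a single accuracy number:
# share of test observations classified correctly (75 / 102)
mean(correct_classes == lda.pred$class)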
library(MASS)
data("Boston")
boston_scaled <- scale(Boston)
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
class(boston_scaled)
## [1] "matrix" "array"
boston_scaled <- as.data.frame(boston_scaled)
dist_eu <- dist(boston_scaled)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
dist_man <- dist(boston_scaled, method = "manhattan")
Running the k-means algorithm on the dataset:
set.seed(123)
k_max <- 10
twcss <- sapply(1:k_max, function(k){kmeans(boston_scaled, k)$tot.withinss})
qplot(x = 1:k_max, y = twcss, geom = 'line')
km <- kmeans(boston_scaled, centers = 6)
Around 6 clusters seems optimal.
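The elbow can also be inspected numerically: the drop in total within-cluster sum of squares per added cluster should level off around the chosen k.
# change in total WCSS when adding one more cluster
diff(twcss)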
# colour by the k-means cluster assignment; note that boston_scaled itself
# has no cluster column at this point, so km$cluster must be used
pairs(boston_scaled, col = km$cluster)
This is a very busy plot, so it is hard to read. I think I followed the instructions, so I don’t know how to fix it, and I gave up on interpreting the results.
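One way to make the plot more readable would be to draw only a handful of columns; the selection below is arbitrary, just as an example:
# pairs plot restricted to a few variables, coloured by the k-means clusters
pairs(boston_scaled[, c("rad", "tax", "nox", "dis", "medv")], col = km$cluster)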
library(MASS)
library(ggplot2)
set.seed(123)
# load and scale data
data("Boston")
boston_scaled <- scale(Boston)
boston_scaled <- as.data.frame(boston_scaled)
# perform k-means with 3 clusters and add the cluster assignments as a new column
# (note: k-means runs here on the unscaled Boston data, while the LDA below uses the scaled data)
km <- kmeans(Boston, centers = 3)
boston_scaled$cluster <- km$cluster
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1, 2)) {
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0,
         x1 = myscale * heads[, choices[1]],
         y1 = myscale * heads[, choices[2]], col = color, length = arrow_heads)
  text(myscale * heads[, choices], labels = row.names(heads),
       cex = tex, col = color, pos = 3)
}
# fit the model and plot
lda.fit <- lda(cluster ~ ., data = boston_scaled)
plot(lda.fit, dimen = 2)
lda.arrows(lda.fit, myscale = 2)
The vectors rad, tax and black are clearly visible, and they are the strongest determinants in separating the observations into different clusters. The vectors rad and tax point in almost the same direction, which means they do not have independent predictive power; as seen earlier, they are correlated, so they may work in combination. The vector black points in another direction, so it seems to be independent of the others.
Next I scale the data and cluster it; we need at least 4 clusters so that the LDA produces three discriminants (LD1-LD3) for the 3D plot. Then I fit the model.
boston_scaled <- as.data.frame(scale(Boston))
km <- kmeans(boston_scaled, centers = 4)
boston_scaled$cluster <- km$cluster
lda.fit <- lda(cluster ~ ., data = boston_scaled)
# add categorical variable "crime"
bins <- quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled <- data.frame(boston_scaled, crime)
# create train and test data
ind <- sample(nrow(boston_scaled), size = nrow(boston_scaled) * 0.9)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]
# select predictors
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 455 14
dim(lda.fit$scaling)
## [1] 14 3
# matrix multiplication of the training predictors with the LDA scaling matrix
# (note: lda.fit was fitted before crim was replaced by crime, so the columns of
# model_predictors do not exactly match the rownames of lda.fit$scaling; the
# dimensions happen to agree, which is why the multiplication runs)
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
# plot 3D, color set to crime classes of train data
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$crime)
# plot 3D, color set to clusters of train data
train$cluster <- as.factor(train$cluster)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = train$cluster)
In the first plot the colour is set according to the crime classes of the training data, and in the second according to the k-means clusters. In many parts these look similar, though (cluster 1 seems to include the same observations as the ones with the high crime rate).
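The visual overlap between the two colourings can be quantified with a cross tabulation of the training data:
# how the crime classes distribute over the k-means clusters
table(crime = train$crime, cluster = train$cluster)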